299 research outputs found

    A Three Step Blind Approach for Improving HPC Systems' Energy Performance

    Nowadays, there is no doubt that energy consumption has become a limiting factor in the design and operation of high-performance computing (HPC) systems, as evidenced by growing efforts from both academia and industry to reduce the energy consumption of these systems. Unlike hardware solutions, software initiatives targeting HPC systems' energy consumption, despite their effectiveness, are often limited for reasons including: (i) the program-specific nature of the proposed solution; (ii) the need for a deep understanding of the applications at hand; (iii) solutions that are difficult for novices to use and/or are designed for single-task environments. This paper proposes a three-step, blind, system-wide, application-independent, fine-grained, and easy-to-use (user-friendly) methodology for improving the energy performance of HPC systems. The methodology breaks into phase detection, phase characterization, and phase identification with system reconfiguration. It is blind in the sense that it does not require any knowledge from users. It relies upon the reconfigurable capabilities offered by the majority of HPC subsystems -- including the processor, storage, memory, and communication subsystems -- to reduce the overall energy consumption of the system (excluding network equipment) at runtime. We also present an implementation of our methodology through which we demonstrate its effectiveness via static analyses and experiments using benchmarks representative of HPC workloads.
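
    The phase-detection step described above could be sketched as follows. This is an illustrative toy, not the paper's actual detector: it splits a stream of system-wide metric samples (hypothetical IPC and memory-activity values) into phases whenever successive samples diverge beyond a threshold.

```python
from math import dist

def detect_phases(samples, threshold=1.0):
    """Group consecutive metric vectors into phases.

    samples: list of equal-length tuples of metric values.
    Returns a list of phases, each a list of samples.
    """
    phases = []
    current = []
    for s in samples:
        if current and dist(current[-1], s) > threshold:
            phases.append(current)   # large jump in metrics: phase boundary
            current = []
        current.append(s)
    if current:
        phases.append(current)
    return phases

samples = [(1.0, 0.1), (1.1, 0.1),   # compute-bound phase
           (0.1, 2.0), (0.2, 2.1)]   # memory-bound phase
print(len(detect_phases(samples)))   # 2
```

    Each detected phase would then be characterized (e.g. compute- vs. memory-bound) and matched against known phases to pick a subsystem reconfiguration.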

    Distributed Management of Massive Data: an Efficient Fine-Grain Data Access Scheme

    This paper addresses the problem of efficiently storing and accessing massive data blocks in a large-scale distributed environment while providing efficient fine-grain access to data subsets. This issue is crucial for applications in the fields of databases, data mining, and multimedia. We propose a data-sharing service based on distributed, RAM-based storage of data that leverages a DHT-based, natively parallel metadata management scheme. As opposed to the most commonly used grid storage infrastructures, which provide mechanisms for explicit data localization and transfer, we provide a transparent access model in which data are accessed through global identifiers. Our proposal has been validated through a prototype implementation whose preliminary evaluation provides promising results.
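
    The transparent access model described above can be sketched minimally: a block's global identifier is hashed to locate the node holding it, so clients never perform explicit localization or transfer. Node names and the hashing scheme are illustrative assumptions, not the paper's actual design.

```python
import hashlib

NODES = ["node-0", "node-1", "node-2", "node-3"]  # hypothetical storage nodes

def locate(block_id: str) -> str:
    """Map a global block identifier to the node responsible for it."""
    digest = hashlib.sha1(block_id.encode()).hexdigest()
    return NODES[int(digest, 16) % len(NODES)]

store = {}  # node -> {block_id: data}; stands in for RAM-based storage

def put(block_id, data):
    store.setdefault(locate(block_id), {})[block_id] = data

def get(block_id):
    # the client only supplies the global identifier
    return store[locate(block_id)][block_id]

put("matrix/chunk-42", b"chunk-bytes")
assert get("matrix/chunk-42") == b"chunk-bytes"
```

    A real DHT would replicate this mapping across peers; the point here is only that data location is a pure function of the identifier.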

    Towards an efficient process placement policy for MPI applications in multicore environments

    This paper presents a method to efficiently place MPI processes on multicore machines. Since MPI implementations often feature efficient support for both shared-memory and network communication, an adequate placement policy is a crucial step in improving application performance. As a case study, we show the results obtained for several NAS computing kernels and explain how the policy influences overall performance. In particular, we found that a policy merely increasing the intranode communication ratio is not enough and that cache utilization is also an influential factor. A more sophisticated policy (e.g., one taking into account the architecture's memory structure) is required to observe performance improvements.
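
    A naive policy of the kind the abstract says is insufficient could be sketched as follows: greedily co-locate the most heavily communicating rank pairs on the same node to maximize the intranode communication ratio. The communication matrix and node size are made up, and, as the paper notes, a real policy must also model the cache and memory hierarchy.

```python
def place(comm, ranks_per_node):
    """comm[i][j] = message volume between ranks i and j (symmetric)."""
    n = len(comm)
    pairs = sorted(((comm[i][j], i, j) for i in range(n)
                    for j in range(i + 1, n)),
                   key=lambda p: p[0], reverse=True)
    node_of, nodes = {}, []
    for _, i, j in pairs:
        for r in (i, j):
            if r not in node_of:
                partner = j if r == i else i
                # join the partner's node if it still has a free core
                if partner in node_of and len(nodes[node_of[partner]]) < ranks_per_node:
                    nodes[node_of[partner]].append(r)
                    node_of[r] = node_of[partner]
                else:
                    node_of[r] = len(nodes)
                    nodes.append([r])
    return nodes

# ranks 0-1 and 2-3 talk heavily; they end up paired on the same nodes
comm = [[0, 9, 1, 1], [9, 0, 1, 1], [1, 1, 0, 9], [1, 1, 9, 0]]
print(place(comm, 2))  # [[0, 1], [2, 3]]
```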

    Optimization in a Self-Stabilizing Service Discovery Framework for Large Scale Systems

    Ability to find and get services is a key requirement in the development of large-scale distributed systems. We consider dynamic and unstable environments, namely Peer-to-Peer (P2P) systems. In previous work, we designed a service discovery solution called Distributed Lexicographic Placement Table (DLPT), based on a hierarchical overlay structure. A self-stabilizing version was given using the Propagation of Information with Feedback (PIF) paradigm. In this paper, we introduce the self-stabilizing COPIF (for Collaborative PIF) scheme. An algorithm is provided with its correctness proof. We use this approach to improve a distributed P2P framework designed for service discovery. Experimental results show significant efficiency gains.
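
    The PIF paradigm mentioned above can be illustrated on a static tree: a broadcast wave propagates a query down from a node, and the feedback wave aggregates answers back up. The tree and query below are illustrative; the paper's COPIF scheme additionally makes concurrent waves collaborate and is self-stabilizing.

```python
def pif(tree, node, match):
    """Broadcast `match` down from `node`; feedback aggregates matching nodes."""
    hits = [node] if match(node) else []      # broadcast phase: test locally
    for child in tree.get(node, []):
        hits += pif(tree, child, match)       # feedback phase: merge children
    return hits

# DLPT-like hierarchy keyed by service-name prefixes (hypothetical)
tree = {"": ["a", "c"], "a": ["ab", "ac"], "c": ["cd"]}
print(pif(tree, "", lambda n: n.startswith("a")))  # ['a', 'ab', 'ac']
```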

    FASTLens (FAst STatistics for weak Lensing): Fast Method for Weak Lensing Statistics and Map Making

    With increasingly large data sets, weak lensing measurements are able to measure cosmological parameters with ever greater precision. However, this increased accuracy also places greater demands on the statistical tools used to extract the available information. To date, the majority of lensing analyses use the two-point statistics of the cosmic shear field. These can be studied either directly, using the two-point correlation function, or in Fourier space, using the power spectrum. But analyzing weak lensing data inevitably involves masking out regions, for example to remove bright stars from the field. Masking out the stars is common practice, but the gaps in the data need proper handling. In this paper, we show how an inpainting technique allows us to properly fill in these gaps with only N log N operations, leading to a new image from which we can compute both the power spectrum and the bispectrum straightforwardly and with very good accuracy. We then propose a new method to compute the bispectrum with a polar FFT algorithm, which has the main advantage of avoiding any interpolation in the Fourier domain. Finally, we propose a new method for dark matter mass map reconstruction from shear observations which integrates this new inpainting concept. A range of examples based on 3D N-body simulations illustrates the results. Final version accepted by MNRAS. The FASTLens software is available from the following link: http://irfu.cea.fr/Ast/fastlens.software.ph
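
    The inpainting idea can be illustrated with a 1-D toy: iteratively threshold Fourier coefficients (a sparsity prior) while keeping observed samples fixed, so the masked gap is filled consistently with the signal's spectrum. This sketch uses a slow O(N^2) transform on a made-up signal; FASTLens works on 2-D shear/convergence maps with fast N log N transforms.

```python
import cmath
from math import cos, pi

def dft(x, sign=-1):
    """Unitary discrete Fourier transform (O(N^2); illustrative only)."""
    n = len(x)
    return [sum(v * cmath.exp(sign * 2j * cmath.pi * k * t / n)
                for t, v in enumerate(x)) / n ** 0.5 for k in range(n)]

def inpaint(data, mask, n_iter=50, keep=2):
    """Fill masked samples by iterative hard thresholding in Fourier space."""
    x = [d if m else 0.0 for d, m in zip(data, mask)]
    for _ in range(n_iter):
        coeffs = dft(x)
        cut = sorted(abs(c) for c in coeffs)[-keep]   # sparsity threshold
        coeffs = [c if abs(c) >= cut else 0 for c in coeffs]
        rec = [c.real for c in dft(coeffs, sign=+1)]  # inverse transform
        x = [d if m else r for d, m, r in zip(data, mask, rec)]  # keep data
    return x

signal = [cos(2 * pi * t / 8) for t in range(8)]
mask = [t != 3 for t in range(8)]       # sample 3 is "masked out"
filled = inpaint(signal, mask)
print(round(filled[3], 3))              # close to cos(3*pi/4) ~ -0.707
```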

    Shutdown Policies with Power Capping for Large Scale Computing Systems

    International audienceLarge scale distributed systems are expected to consume huge amounts of energy. To solve this issue, shutdown policies constitute an appealing approach able to dynamically adapt the resource set to the actual workload. However, multiple constraints have to be taken into account for such policies to be applied on real infrastructures, in particular the time and energy cost of shutting down and waking up nodes, and power capping to avoid disruption of the system. In this paper, we propose models translating these various constraints into different shutdown policies that can be combined. Our models are validated through simulations on real workload traces and power measurements on real testbeds.
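
    The core energy trade-off behind such policies can be sketched with a break-even computation: powering a node off only pays if the idle period exceeds the time at which the energy saved while off outweighs the shutdown/wake-up overhead. All power and energy figures below are illustrative, not measurements from the paper.

```python
def break_even_time(p_idle, p_off, e_shutdown, e_wakeup):
    """Minimum idle duration (s) for which shutting down saves energy.

    p_idle, p_off: node power (W) when idle vs. powered off.
    e_shutdown, e_wakeup: energy overheads (J) of the two transitions.
    """
    return (e_shutdown + e_wakeup) / (p_idle - p_off)

def should_shutdown(idle_s, p_idle=100.0, p_off=5.0,
                    e_shutdown=1500.0, e_wakeup=4500.0):
    return idle_s > break_even_time(p_idle, p_off, e_shutdown, e_wakeup)

print(round(break_even_time(100.0, 5.0, 1500.0, 4500.0), 1))  # 63.2 s
print(should_shutdown(30))   # False: idle period too short
print(should_shutdown(600))  # True: shutting down saves energy
```

    A power-capping constraint would additionally bound how many nodes may wake up simultaneously, since wake-up power spikes above idle power.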

    SPOT: a web-based tool for using biological databases to prioritize SNPs after a genome-wide association study

    SPOT (http://spot.cgsmd.isi.edu), the SNP prioritization online tool, is a web site for integrating biological databases into the prioritization of single nucleotide polymorphisms (SNPs) for further study after a genome-wide association study (GWAS). Typically, the next step after a GWAS is to genotype the top signals in an independent replication sample. Investigators will often incorporate information from biological databases so that biologically relevant SNPs, such as those in genes related to the phenotype or with potentially non-neutral effects on gene expression such as splice sites, are given higher priority. We recently introduced the genomic information network (GIN) method for systematically implementing this kind of strategy. The SPOT web site allows users to upload a list of SNPs and GWAS P-values and returns a prioritized list of SNPs using the GIN method. Users can specify candidate genes or genomic regions with custom levels of prioritization. The results can be downloaded or viewed in the browser, where users can interactively explore the details of each SNP, including graphical representations of the GIN method. For investigators interested in incorporating biological databases into a post-GWAS SNP selection strategy, the SPOT web tool is an easily implemented and flexible solution.
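
    The general idea of such prioritization can be sketched as re-ranking SNPs by combining the association P-value with a biological-relevance weight (e.g. a boost for candidate genes or splice-site annotations). The SNPs and weights below are made up, and this simple score is only in the spirit of, not identical to, the GIN method.

```python
from math import log10

def prioritize(snps):
    """snps: list of (rsid, p_value, bio_weight) tuples.

    Score = -log10(P) * weight; higher score means higher priority.
    """
    return sorted(snps, key=lambda s: -log10(s[1]) * s[2], reverse=True)

snps = [("rs1", 1e-7, 1.0),   # strong signal, no annotation
        ("rs2", 1e-5, 2.0),   # weaker signal, but in a candidate gene
        ("rs3", 1e-4, 1.0)]
print([rsid for rsid, _, _ in prioritize(snps)])  # ['rs2', 'rs1', 'rs3']
```

    The annotation weight lets a biologically plausible SNP outrank a statistically stronger but unannotated one, which is exactly the behavior the tool is designed to make explicit and tunable.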

    A Mild Form of SLC29A3 Disorder: A Frameshift Deletion Leads to the Paradoxical Translation of an Otherwise Noncoding mRNA Splice Variant

    We investigated two siblings with granulomatous histiocytosis prominent in the nasal area, mimicking rhinoscleroma and Rosai-Dorfman syndrome. Genome-wide linkage analysis and whole-exome sequencing identified a homozygous frameshift deletion in SLC29A3, which encodes human equilibrative nucleoside transporter-3 (hENT3). Germline mutations in SLC29A3 have been reported in rare patients with a wide range of overlapping clinical features and inherited disorders including H syndrome, pigmented hypertrichosis with insulin-dependent diabetes, and Faisalabad histiocytosis. With the exception of insulin-dependent diabetes and mild finger and toe contractures in one sibling, the two patients with nasal granulomatous histiocytosis studied here displayed none of the many SLC29A3-associated phenotypes. This mild clinical phenotype probably results from a remarkable genetic mechanism. The SLC29A3 frameshift deletion prevents the expression of the normally coding transcripts. It instead leads to the translation, expression, and function of an otherwise noncoding, out-of-frame mRNA splice variant lacking exon 3 that is eliminated by nonsense-mediated mRNA decay (NMD) in healthy individuals. The mutated isoform differs from the wild-type hENT3 by the modification of 20 residues in exon 2 and the removal of another 28 amino acids in exon 3, which include the second transmembrane domain. As a result, this new isoform displays some functional activity. This mechanism probably accounts for the narrow and mild clinical phenotype of the patients. This study highlights the ‘rescue’ role played by a normally noncoding mRNA splice variant of SLC29A3, uncovering a new mechanism by which frameshift mutations can be hypomorphic.

    Altered translation of GATA1 in Diamond-Blackfan anemia

    Ribosomal protein haploinsufficiency occurs in diverse human diseases including Diamond-Blackfan anemia (DBA) [1, 2], congenital asplenia [3] and T cell leukemia [4]. Yet, how mutations in genes encoding ubiquitously expressed proteins such as these result in cell-type– and tissue-specific defects remains unknown [5]. Here, we identify mutations in GATA1, encoding the critical hematopoietic transcription factor GATA-binding protein-1, that reduce levels of full-length GATA1 protein and cause DBA in rare instances. We show that ribosomal protein haploinsufficiency, the more common cause of DBA, can lead to decreased GATA1 mRNA translation, possibly resulting from a higher threshold for initiation of translation of this mRNA in comparison with other mRNAs. In primary hematopoietic cells from patients with mutations in RPS19, encoding ribosomal protein S19, the amplitude of a transcriptional signature of GATA1 target genes was globally and specifically reduced, indicating that the activity, but not the mRNA level, of GATA1 is decreased in patients with DBA associated with mutations affecting ribosomal proteins. Moreover, the defective hematopoiesis observed in patients with DBA associated with ribosomal protein haploinsufficiency could be partially overcome by increasing GATA1 protein levels. Our results provide a paradigm by which selective defects in translation due to mutations affecting ubiquitous ribosomal proteins can result in human disease. Funding: National Institutes of Health (U.S.) (Grant P01 HL32262); National Institutes of Health (U.S.) (Grant U54 HG003067-09).

    Exome Sequencing and Genetic Testing for MODY

    Context: Genetic testing for monogenic diabetes is important for patient care. Given the extensive genetic and clinical heterogeneity of diabetes, exome sequencing might provide additional diagnostic potential when standard Sanger sequencing-based diagnostics are inconclusive. Objective: The aim of the study was to examine the performance of exome sequencing for a molecular diagnosis of MODY in patients who have undergone conventional diagnostic sequencing of candidate genes with negative results. Research Design and Methods: We performed exome enrichment followed by high-throughput sequencing in nine patients with suspected MODY. They were Sanger sequencing-negative for mutations in the HNF1A, HNF4A, GCK, HNF1B and INS genes. We excluded common, non-coding and synonymous gene variants, and performed in-depth analysis on filtered sequence variants in a pre-defined set of 111 genes implicated in glucose metabolism. Results: On average, we obtained 45× median coverage of the entire targeted exome and found 199 rare coding variants per individual. We identified 0–4 rare non-synonymous and nonsense variants per individual in our a priori list of 111 candidate genes. Three of the variants were considered pathogenic (in ABCC8, HNF4A and PPARG, respectively); thus, exome sequencing led to a genetic diagnosis in at least three of the nine patients. Approximately 91% of known heterozygous SNPs in the target exomes were detected, but we also found low coverage in some key diabetes genes using our current exome sequencing approach. Novel variants in the genes ARAP1, GLIS3, MADD, NOTCH2 and WFS1 need further investigation to reveal their possible role in diabetes. Conclusion: Our results demonstrate that exome sequencing can improve molecular diagnostics of MODY when used as a complement to Sanger sequencing. However, improvements will be needed, especially concerning coverage, before the full potential of exome sequencing can be realized.
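
    The filtering strategy described in the Methods could be sketched as a simple pipeline: drop common, non-coding and synonymous variants, then restrict to the candidate-gene list. Field names, thresholds, and the gene subset are illustrative assumptions, not the study's exact pipeline or full 111-gene list.

```python
CANDIDATE_GENES = {"ABCC8", "HNF4A", "PPARG", "GCK"}  # hypothetical subset

def filter_variants(variants, max_af=0.01):
    """Keep rare, protein-altering variants in candidate genes."""
    keep = []
    for v in variants:
        if v["allele_freq"] > max_af:                     # common variant
            continue
        if v["consequence"] in ("synonymous", "intronic"):  # non-altering
            continue
        if v["gene"] not in CANDIDATE_GENES:              # off the gene list
            continue
        keep.append(v)
    return keep

variants = [
    {"gene": "ABCC8", "allele_freq": 0.0001, "consequence": "missense"},
    {"gene": "ABCC8", "allele_freq": 0.25,   "consequence": "missense"},
    {"gene": "TTN",   "allele_freq": 0.0001, "consequence": "missense"},
]
print(len(filter_variants(variants)))  # 1
```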